Ground-Aware Point Cloud Semantic Segmentation for Autonomous Driving
Jian Wu Jianbo Jiao Qingxiong Yang Zheng-Jun Zha
Xuejin Chen*
National Engineering Laboratory for Brain-inspired Intelligence Technology and Application
University of Science and Technology of China
ACM Multimedia 2019
Figure 1: An overview of the proposed ground-aware network for point cloud semantic segmentation. We first roughly extract ground points from the input point cloud by multi-section plane fitting. The point cloud is then partitioned into multiple regions for local feature extraction. For each region, point features and region features are extracted using MLPs. The roughly segmented ground and object points are fed into our ground-aware attention module, which captures long-range dependencies between points and produces ground-affinity features. By concatenating the affinity feature, point feature, and region feature, each point is classified into K categories for the final prediction of semantic labels.
Abstract:
Semantic understanding of 3D scenes is essential for autonomous driving. Although considerable effort has been devoted to semantic segmentation of dense point clouds, the extreme sparsity of 3D LiDAR data poses significant challenges in autonomous driving. In this paper, we address the semantic segmentation of extremely sparse LiDAR point clouds, with specific consideration of the ground as a reference. In particular, we propose a ground-aware framework that effectively resolves the ambiguity caused by data sparsity. We employ a multi-section plane fitting approach to roughly extract ground points that assist the segmentation of objects on the ground. Based on the roughly extracted ground points, our approach implicitly integrates the ground information in a weakly-supervised manner and utilizes ground-aware features via a new ground-aware attention module. The proposed module captures long-range dependencies between the ground and objects, which significantly facilitates the segmentation of small objects that consist of only a few points in extremely sparse point clouds. Extensive experiments on two large-scale LiDAR point cloud datasets for autonomous driving demonstrate that the proposed method achieves state-of-the-art performance both quantitatively and qualitatively.
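To illustrate the idea of the ground-aware attention described above (this is a minimal numpy sketch of generic scaled dot-product soft attention from object points to ground points, not the paper's exact architecture; the projection matrices `Wq`, `Wk`, `Wv` and all shapes are illustrative assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ground_aware_attention(obj_feat, ground_feat, Wq, Wk, Wv):
    """Soft attention from object points to ground points.

    Each object point (query) attends over all ground points (keys)
    and aggregates their transformed features (values) into a
    ground-affinity feature, capturing long-range dependencies
    between objects and the ground.
    """
    Q = obj_feat @ Wq                         # (N_obj, d) queries
    K = ground_feat @ Wk                      # (N_gnd, d) keys
    V = ground_feat @ Wv                      # (N_gnd, d) values
    scores = Q @ K.T / np.sqrt(Q.shape[1])    # (N_obj, N_gnd)
    attn = softmax(scores, axis=-1)           # each row sums to 1
    return attn @ V                           # (N_obj, d) affinity features
```

In the full pipeline, such affinity features would be concatenated with the per-point and per-region features before the final per-point classification.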
Results:
Figure 2: Three groups of semantic segmentation results on the 3D-DF dataset using different methods. From left to right: (a) the input point cloud, the results of (b) PointNet++, (c) PointCNN, (d) SqueezeSegv2, and (e) SPG, (f) the result of our method, and (g) the corresponding ground truth. For each group, we also show close-ups of one or two local regions to demonstrate the effectiveness of our method for segmenting small objects. The cyan ground points in (g) are not manually annotated but are extracted as pseudo labels by the ground extraction module described in Sec. 3.1.
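The ground extraction module of Sec. 3.1 is not reproduced on this page; the following is a rough sketch of how multi-section plane fitting for ground extraction could work. It assumes sections taken along the driving (x) direction, an iterative least-squares plane fit seeded from the lowest points of each section, and a fixed inlier distance threshold — all of these choices and parameter values are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def fit_plane_lstsq(pts):
    """Least-squares plane z = a*x + b*y + c for an (N, 3) point array."""
    A = np.c_[pts[:, :2], np.ones(len(pts))]
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coef  # (a, b, c)

def extract_ground(points, n_sections=4, dist_thresh=0.2, n_iters=3):
    """Rough ground extraction: split the cloud into sections along x,
    iteratively fit a plane per section, and mark points close to it."""
    ground_mask = np.zeros(len(points), dtype=bool)
    edges = np.linspace(points[:, 0].min(), points[:, 0].max(), n_sections + 1)
    for i in range(n_sections):
        idx = np.flatnonzero(
            (points[:, 0] >= edges[i]) & (points[:, 0] <= edges[i + 1]))
        if len(idx) < 3:
            continue
        # seed the fit with the lowest points, then refine the inlier set
        seed = idx[np.argsort(points[idx, 2])[: max(3, len(idx) // 4)]]
        inliers = seed
        for _ in range(n_iters):
            a, b, c = fit_plane_lstsq(points[inliers])
            dist = np.abs(points[idx, 2]
                          - (a * points[idx, 0] + b * points[idx, 1] + c))
            inliers = idx[dist < dist_thresh]
            if len(inliers) < 3:       # degenerate fit; fall back to seed
                inliers = seed
                break
        ground_mask[inliers] = True
    return ground_mask
```

Fitting one plane per section, rather than a single global plane, lets the estimate follow gentle slope changes along the road.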
Figure 3: Qualitative comparison of our models under different configurations. (a) Input point cloud. (b) Vanilla model. (c) Vanilla + Class-Balanced Loss (CBL). (d) Vanilla + CBL + Hard Attention. (e) Vanilla + CBL + Soft Attention. (f) Ground Truth.
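The exact Class-Balanced Loss ablated in (c)–(e) is not specified on this page. One common formulation reweights the per-point cross-entropy by the inverse "effective number of samples" per class, in the style of Cui et al. (CVPR 2019); the sketch below follows that variant and is an assumption, not necessarily the paper's loss:

```python
import numpy as np

def class_balanced_weights(labels, n_classes, beta=0.999):
    """Effective-number class weights: w_c ∝ (1 - beta) / (1 - beta^n_c).

    Rare classes get larger weights; weights are normalized to sum to
    n_classes. (A class absent from `labels` would get a huge weight;
    a real implementation should clamp or zero it.)
    """
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    effective = (1.0 - np.power(beta, counts)) / (1.0 - beta)
    w = 1.0 / np.maximum(effective, 1e-8)
    return w * n_classes / w.sum()

def weighted_cross_entropy(logits, labels, weights):
    """Per-point cross-entropy, each point weighted by its class weight."""
    z = logits - logits.max(axis=1, keepdims=True)          # stability shift
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True)) # log-softmax
    return -(weights[labels] * logp[np.arange(len(labels)), labels]).mean()
```

The reweighting counters the heavy class imbalance of driving scenes, where ground and large structures dominate and small objects contribute very few points.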
Acknowledgements:
This work was supported by the National Key R&D Program of China under Grant 2017YFB1300201, the National Natural Science Foundation of China (NSFC) under Grants 61632006, 61622211, and 61620106009, as well as the Fundamental Research Funds for the Central Universities under Grants WK3490000003 and WK2100100030. Jianbo Jiao is supported by the EPSRC Programme Grant Seebibyte EP/M013774/1. This work was partially conducted when Jian Wu was an intern at MoonX.AI.
|
|
Main References:
[1] Loic Landrieu and Martin Simonovsky. 2018. Large-Scale Point Cloud Semantic Segmentation with Superpoint Graphs. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 4558–4567.
[2] Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Xinhan Di, and Baoquan Chen. 2018. PointCNN: Convolution on X-Transformed Points. In Proc. International Conference on Neural Information Processing Systems (NeurIPS). 820–830.
[3] Charles R. Qi, Li Yi, Hao Su, and Leonidas J. Guibas. 2017. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In Proc. International Conference on Neural Information Processing Systems (NeurIPS). 5105–5114.
[4] Bichen Wu, Alvin Wan, Xiangyu Yue, and Kurt Keutzer. 2018. SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from 3D LiDAR Point Cloud. In Proc. IEEE International Conference on Robotics and Automation (ICRA). 1887–1893.
BibTeX:
@inproceedings{Wu2019ground-aware,
  author    = {Wu, Jian and Jiao, Jianbo and Yang, Qingxiong and Zha, Zheng-Jun and Chen, Xuejin},
  title     = {Ground-Aware Point Cloud Semantic Segmentation for Autonomous Driving},
  booktitle = {ACM Multimedia},
  year      = {2019}
}
Downloads:
Disclaimer: The paper listed on this page is copyright-protected. By clicking on the paper link below, you confirm that you or your institution have the right to access the corresponding PDF file.
Copyright © 2019 GCL, USTC